    Zerotree design for image compression: toward weighted universal zerotree coding

    We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, p. 243-50, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman-style code using the standard zerotree to the performance of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB performance improvement over a Said-Pearlman zerotree.
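
    The zerotree condition referenced above can be illustrated with a short sketch (not the paper's algorithm): assuming wavelet coefficients are stored in a dictionary and the spatial-orientation tree is given by a hypothetical children map, a node is a zerotree root at threshold T when it and all of its descendants have magnitude below T. A minimal Python illustration:

# Illustrative sketch of the zerotree significance test behind
# Said-Pearlman-style coders; `coeffs` and `children` are assumed,
# hypothetical data structures, not those of the paper.

def descendants(node, children):
    """Yield every descendant of `node` in the spatial-orientation tree."""
    stack = list(children.get(node, []))
    while stack:
        n = stack.pop()
        yield n
        stack.extend(children.get(n, []))

def is_zerotree_root(node, coeffs, children, threshold):
    """True if `node` and all of its descendants are insignificant at `threshold`."""
    if abs(coeffs[node]) >= threshold:
        return False
    return all(abs(coeffs[d]) < threshold for d in descendants(node, children))

# Toy tree: root "a" with children "b" and "c"; all magnitudes below threshold 8.
coeffs = {"a": 3.0, "b": -2.0, "c": 1.5}
children = {"a": ["b", "c"]}
print(is_zerotree_root("a", coeffs, children, threshold=8))  # True

    A data-dependent design, as proposed in the paper, chooses a collection of tree structures to use rather than fixing a single, data-independent one.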

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
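
    As an aside (not from the article): one common tool in rate-distortion-optimized design is Lagrangian selection, choosing for each input the option that minimizes D + λR. A minimal Python sketch with an invented candidate list:

# Illustrative only: Lagrangian selection D + lambda * R, a standard device for
# trading rate against distortion; the candidate options below are invented.

def best_option(candidates, lam):
    """Pick the (name, distortion, rate) tuple minimizing D + lam * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

candidates = [("coarse", 4.0, 1.0), ("medium", 1.0, 2.5), ("fine", 0.2, 5.0)]
for lam in (0.1, 1.0, 10.0):
    name, d, r = best_option(candidates, lam)
    print(f"lambda={lam}: choose {name} (D={d}, R={r})")

    Sweeping the multiplier λ traces out the trade-off between rate and distortion.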

    Optimal multiple description and multiresolution scalar quantizer design

    The author presents new algorithms for fixed-rate multiple description and multiresolution scalar quantizer design. The algorithms both run in time polynomial in the size of the source alphabet and guarantee globally optimal solutions. To the author's knowledge, these are the first globally optimal design algorithms for multiple description and multiresolution quantizers.
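
    The paper's algorithms are not reproduced here, but a related classical building block gives the flavor of polynomial-time, globally optimal design: for a finite, sorted source alphabet under squared error, an optimal fixed-rate scalar quantizer with contiguous codecells can be found by dynamic programming over interval partitions. A Python sketch under those assumptions:

# Illustrative DP for fixed-rate scalar quantizer design on a finite, sorted
# alphabet with squared error; assumes optimal codecells are contiguous.
# A generic sketch, not the multiple-description/multiresolution algorithm
# of the paper.

def cell_cost(xs, ps, i, j):
    """MSE contribution of one codecell covering xs[i:j], using its centroid."""
    mass = sum(ps[i:j])
    if mass == 0:
        return 0.0
    centroid = sum(p * x for x, p in zip(xs[i:j], ps[i:j])) / mass
    return sum(p * (x - centroid) ** 2 for x, p in zip(xs[i:j], ps[i:j]))

def optimal_quantizer(xs, ps, k):
    """Minimum expected squared error over all partitions of xs into k contiguous cells."""
    n = len(xs)
    inf = float("inf")
    # dp[m][j] = best cost of covering xs[:j] with m cells
    dp = [[inf] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            dp[m][j] = min(dp[m - 1][i] + cell_cost(xs, ps, i, j) for i in range(j))
    return dp[k][n]

xs = [0.0, 1.0, 2.0, 5.0, 6.0]
ps = [0.2, 0.2, 0.2, 0.2, 0.2]
print(optimal_quantizer(xs, ps, k=2))  # expected squared error of a 2-level quantizer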

    Conditional weighted universal source codes: second order statistics in universal coding

    We consider the use of second order statistics in two-stage universal source coding. Examples of two-stage universal codes include the weighted universal vector quantization (WUVQ), weighted universal bit allocation (WUBA), and weighted universal transform coding (WUTC) algorithms. The second order statistics are incorporated in two-stage universal source codes in a manner analogous to the method by which second order statistics are incorporated in entropy constrained vector quantization (ECVQ) to yield conditional ECVQ (CECVQ). In this paper, we describe an optimal two-stage conditional entropy constrained universal source code along with its associated optimal design algorithm and a fast (but nonoptimal) variation of the original code. The design technique and coding algorithm presented here result in a new family of conditional entropy constrained universal codes including, but not limited to, the conditional entropy constrained WUVQ (CWUVQ), the conditional entropy constrained WUBA (CWUBA), and the conditional entropy constrained WUTC (CWUTC). The fast variation of the conditional entropy constrained universal codes allows the designer to trade off performance gains against storage and delay costs. We demonstrate the performance of the proposed codes on a collection of medical brain scans. On the given data set, the CWUVQ achieves up to 7.5 dB performance improvement over variable-rate WUVQ and up to 12 dB performance improvement over ECVQ. On the same data set, the fast variation of the CWUVQ achieves performance identical to that of the original code at all but the lowest rates (less than 0.125 bits per pixel).
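
    For context (not from the paper): entropy-constrained codes of this family encode an input x with the codeword minimizing d(x, c_i) + λ·l_i, where l_i ≈ -log2 p_i is the codeword's description length; a conditional variant replaces p_i with a probability conditioned on previously coded data. A minimal Python sketch with an invented codebook and context model:

import math

# Illustrative entropy-constrained encoding rule; the codebook and the
# context-conditional probabilities below are invented for the example,
# not the CWUVQ/CECVQ models of the paper.

def encode(x, codebook, probs, lam):
    """Return the codeword index minimizing squared error + lam * ideal code length."""
    return min(
        range(len(codebook)),
        key=lambda i: (x - codebook[i]) ** 2 + lam * (-math.log2(probs[i])),
    )

codebook = [-1.0, 0.0, 1.0]
p_uncond = [0.25, 0.5, 0.25]            # unconditional index probabilities
p_given_prev_high = [0.05, 0.15, 0.80]  # probabilities conditioned on a "high" previous index

x = 0.6
print(encode(x, codebook, p_uncond, lam=0.5))          # picks index 1 (codeword 0.0)
print(encode(x, codebook, p_given_prev_high, lam=0.5)) # conditioning makes index 2 cheaper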

    Lossless source coding for multiple access networks

    A multiple access source code (MASC) is a source code designed for the following network configuration: a pair of jointly distributed information sequences {X_i}_{i=1}^∞ and {Y_i}_{i=1}^∞ is drawn i.i.d. according to joint probability mass function (p.m.f.) p(x,y); the encoder for each source operates without knowledge of the other source; the decoder receives the encoded bit streams of both sources. The rate region for MASCs with arbitrarily small but non-zero error probabilities was studied by Slepian and Wolf. In this paper, we consider the properties of optimal truly lossless MASCs and apply our findings to practical truly lossless and near-lossless code design.
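
    For reference (not part of the abstract): the Slepian-Wolf region mentioned above is R_X ≥ H(X|Y), R_Y ≥ H(Y|X), R_X + R_Y ≥ H(X,Y); truly lossless (zero-error) coding, the subject of the paper, can require more and depends on where the joint pmf is zero. A small Python sketch computing the Slepian-Wolf bounds from an invented joint pmf:

import math

# Illustrative computation of the Slepian-Wolf bounds from a joint pmf;
# the example pmf is invented, not taken from the paper.

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf if p > 0)

def slepian_wolf_bounds(joint):
    """joint[x][y] = p(x, y); returns (H(X|Y), H(Y|X), H(X,Y))."""
    flat = [p for row in joint for p in row]
    h_xy = entropy(flat)
    h_x = entropy([sum(row) for row in joint])
    h_y = entropy([sum(col) for col in zip(*joint)])
    return h_xy - h_y, h_xy - h_x, h_xy

joint = [[0.4, 0.1],
         [0.1, 0.4]]
hx_given_y, hy_given_x, h_joint = slepian_wolf_bounds(joint)
print(f"R_X >= {hx_given_y:.3f}, R_Y >= {hy_given_x:.3f}, R_X + R_Y >= {h_joint:.3f}")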

    On the Separation of Lossy Source-Network Coding and Channel Coding in Wireline Networks

    This paper proves the separation between source-network coding and channel coding in networks of noisy, discrete, memoryless channels. We show that the set of achievable distortion matrices in delivering a family of dependent sources across such a network equals the set of achievable distortion matrices for delivering the same sources across a distinct network built by replacing each channel with a noiseless, point-to-point bit-pipe of the corresponding capacity. Thus a code that applies source-network coding over links made almost lossless through independent channel coding on each link asymptotically achieves the optimal performance of the network as a whole. Comment: 5 pages, to appear in the proceedings of the 2010 IEEE International Symposium on Information Theory (ISIT).
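
    A side illustration (not from the paper): the equivalent network replaces each noisy link with a noiseless bit-pipe whose rate equals that link's capacity; for a binary symmetric channel with crossover probability p the capacity is 1 - H_b(p). A Python sketch over an invented three-link network:

import math

# Illustrative only: replace each noisy link with a bit-pipe whose rate equals
# the link capacity; the BSC links listed here form an invented example network.

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

links = {("s", "a"): 0.05, ("a", "t"): 0.11, ("s", "t"): 0.2}  # edge -> crossover prob
bit_pipes = {edge: bsc_capacity(p) for edge, p in links.items()}
for edge, rate in bit_pipes.items():
    print(edge, f"bit-pipe rate = {rate:.3f} bits/use")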

    Communication and distributional complexity of joint probability mass functions

    The problem of truly lossless (P_e = 0) distributed source coding [1] requires knowledge of the joint statistics of the sources. In particular, the locations of the zeroes of the probability mass functions (pmfs) are crucial for encoding at rates below (H(X), H(Y)) [2]. We consider the distributed computation of the empirical joint pmf P_n of a sequence of random variable pairs observed at physically separated nodes of a network. We consider both worst-case and average measures of information exchange and treat both exact calculation of P_n and a notion of approximation. We find that in all cases the communication cost grows linearly with the size of the input. Further, we consider the problem of determining whether the empirical pmf has a zero in a particular location and show that in most cases considered this also requires a communication cost that is linear in the input size.
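
    Not from the paper, but to fix ideas: the object being computed is the empirical joint pmf P_n of n observed pairs, and the zero-location question asks whether a particular pair ever occurs. A centralized Python sketch with invented sequences (the paper's subject is the communication cost of doing this when the two sequences sit at separated nodes):

from collections import Counter

# Illustrative, centralized computation of the empirical joint pmf P_n;
# the sequences below are invented. The paper studies the cost of computing
# this when x_seq and y_seq live at physically separated nodes.

def empirical_joint_pmf(x_seq, y_seq):
    n = len(x_seq)
    counts = Counter(zip(x_seq, y_seq))
    return {pair: c / n for pair, c in counts.items()}

x_seq = [0, 0, 1, 1, 0, 1]
y_seq = [0, 1, 1, 1, 0, 0]
pn = empirical_joint_pmf(x_seq, y_seq)
print(pn)
print("P_n(0, 1) == 0 ?", pn.get((0, 1), 0.0) == 0.0)  # zero-location query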

    Improved bounds for the rate loss of multiresolution source codes

    We present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an M-resolution code, the rate loss at the ith resolution with distortion D_i is defined as L_i = R_i - R(D_i), where R_i is the rate achievable by the MRSC at stage i. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: (i) when both resolutions are equally important; (ii) when the rate loss at the first resolution is 0 (L_1 = 0); (iii) when the rate loss at the second resolution is 0 (L_2 = 0). The work of Lastras and Berger (see ibid., vol. 47, p. 918-26, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as D_2 approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) L_1 < 1.1610 for all D_2 < 0.7250; (c) tighten the Lastras-Berger bound for scenario (i) from L_i ≤ 1/2 to L_i < 0.3802, i ∈ {1,2}; and (d) generalize the bounds for scenarios (ii) and (iii) to M-resolution codes with M ≥ 2. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC where each resolution describes an incremental reproduction and the kth-resolution reconstruction equals the sum of the first k incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another which depends on the source entropy.
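
    As a worked reference point (not a result of the paper): for a memoryless Gaussian source under squared error, successive refinement incurs no rate loss, which is why constant bounds that hold for arbitrary memoryless sources are of interest. In the abstract's notation, assuming source variance σ² and cumulative rates R_1, R_2, a LaTeX sketch:

% Rate loss at resolution i, as defined above: L_i = R_i - R(D_i), with R_i the
% cumulative rate the MRSC has spent by the time it reaches distortion D_i.
% Gaussian example (squared error), assuming variance \sigma^2:
\[
  R(D) = \tfrac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 .
\]
% A two-resolution code may first describe the source to distortion D_1 and then
% refine the description to D_2 < D_1 using cumulative rates
\[
  R_1 = \tfrac{1}{2}\log_2\frac{\sigma^2}{D_1}, \qquad
  R_2 = \tfrac{1}{2}\log_2\frac{\sigma^2}{D_2},
\]
% giving L_1 = L_2 = 0: the Gaussian source is successively refinable, so nonzero
% rate loss, and the bounds above, concern more general sources.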